Dynamic Visualisation of Many-Objective Populations
This is the author accepted manuscript. The final version is available from the Operational Research Society.

There has recently been an increase in research activity regarding the visualisation of many-objective populations. Two of the main drivers for this have been (i) aiding decision makers in comparing and selecting designs returned from a many-objective optimisation run, and (ii) helping in the selection of solutions during interactive optimisation. In both of these situations there is often a dynamic element: populations evolving over time change their relative relationships, and the quality comparison measure itself can be altered, redefining member relations. Here we illustrate how a number of existing visualisations from various domains may be applied to many-objective populations, using the d3 package, to aid the understanding of population relations. d3 is inherently dynamic, and will automatically respond to any changes in the base document underpinning the visualisation, allowing the visualisation package to 'bolt-on' to any other program that can produce or update the underlying file.
Computationally Efficient Local Optima Network Construction
The codebase for this paper is available at https://github.com/fieldsend/local_optima_networks.

There has been an increasing amount of research on the visualisation
of search landscapes through the use of exact and approximate
local optima networks (LONs). Although there are many papers
available describing the construction of a LON, there is a dearth
of code released to support the general practitioner constructing
a LON for their problem. Furthermore, a naive implementation of
the algorithms described in work on LONs will lead to inefficient
and costly code, due to the possibility of repeatedly reevaluating
neighbourhood members, and partially overlapping greedy paths.
Here we discuss algorithms for the efficient computation of both
exact and approximate LONs, and provide open source code online.
We also provide some empirical illustrations of the reduction in the
number of recursive greedy calls, and quality function calls that can
be obtained on NK model landscapes, and discretised versions of
the IEEE CEC 2013 niching competition test functions, using the
developed framework compared to naive implementations. In many
instances, improvements of multiple orders of magnitude are observed.

This work was supported by the Engineering and Physical Sciences
Research Council [grant number EP/N017846/1]. The author would
like to thank Sébastien Vérel and Gabriela Ochoa for providing
inspirational invited talks on LONs at the University of Exeter
during this grant, and also Ozgur Akman, Khulood Alyahya and
Kevin Doherty.
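The cost savings described in this abstract come largely from never re-evaluating a solution that an earlier greedy path has already visited. A minimal sketch of that idea (an illustration under simple assumptions, not the released codebase: the function names, the shared-dictionary cache and the one-bit-flip neighbourhood are all invented here):

```python
def greedy_descent(start, quality, neighbours, cache):
    """Follow a greedy path to a local optimum (minimisation),
    recording every quality call in `cache` so that overlapping
    greedy paths and repeated neighbourhood members are never
    re-evaluated."""
    current = start
    if current not in cache:
        cache[current] = quality(current)
    while True:
        best, best_q = current, cache[current]
        for n in neighbours(current):
            if n not in cache:
                cache[n] = quality(n)
            if cache[n] < best_q:
                best, best_q = n, cache[n]
        if best == current:
            return current  # no improving neighbour: a local optimum
        current = best

def flip_neighbours(x):
    """One-bit-flip neighbourhood on a binary string (illustrative choice)."""
    return [x[:i] + ('1' if x[i] == '0' else '0') + x[i + 1:]
            for i in range(len(x))]
```

Running `greedy_descent` from every candidate start while reusing one cache gives basin assignments for an (approximate) LON with each solution evaluated at most once, which is where the reported multiple-orders-of-magnitude savings over a naive implementation would come from.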
Efficient Real-Time Hypervolume Estimation with Monotonically Reducing Error
This is the author accepted manuscript. The final version is available from ACM via the DOI in this record. The codebase for this paper is available at https://github.com/fieldsend/hypervolume.

The hypervolume (or S-metric) is a widely used quality measure
employed in the assessment of multi- and many-objective evolutionary algorithms. It is also directly integrated as a component in
the selection mechanism of some popular optimisers. Exact hypervolume calculation becomes prohibitively expensive in real-time
applications as the number of objectives increases and/or the approximation set grows. As such, Monte Carlo (MC) sampling is often
used to estimate its value rather than exactly calculating it. This
estimation is inevitably subject to error. As standard with Monte
Carlo approaches, the standard error decreases with the square
root of the number of MC samples. We propose a number of real-time hypervolume estimation methods for unconstrained archives
— principally for use in real-time convergence analysis. Furthermore, we show how the number of domination comparisons can be
considerably reduced by exploiting incremental properties of the
approximated Pareto front. In these methods the estimation error
monotonically decreases over time for (i) a capped budget of samples per algorithm generation and (ii) a fixed budget of dedicated
computation time per optimiser generation for new MC samples.
Results are provided using an illustrative worst-case scenario with
rapid archive growth, demonstrating the orders-of-magnitude speed-up possible.

Engineering and Physical Sciences Research Council (EPSRC); Innovate U
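The basic MC estimation idea the abstract builds on can be sketched as follows. This is a plain (non-incremental) sketch for minimisation, not the paper's archive-exploiting method; the function name, the assumption of non-negative objectives with the ideal point at the origin, and the sampling box are all illustrative choices:

```python
import random

def mc_hypervolume(front, ref, n_samples=10000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimisation) relative to reference point `ref`, assuming all
    objectives are non-negative. A uniform sample in the box [0, ref]
    counts as dominated if some front member is <= it in every
    objective; the estimate is the dominated fraction times the box
    volume. As the abstract notes, the standard error of such an
    estimate shrinks with the square root of n_samples."""
    rng = random.Random(seed)
    m = len(ref)
    box_volume = 1.0
    for r in ref:
        box_volume *= r
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(0.0, r) for r in ref]
        if any(all(p[k] <= s[k] for k in range(m)) for p in front):
            hits += 1
    return box_volume * hits / n_samples
```

The paper's contribution is in reusing samples and reducing the domination comparisons as the archive grows, so that the error of the running estimate decreases monotonically over generations; the sketch above redoes all the work from scratch on every call.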
Evolutionary multi-path routing for network lifetime and robustness in wireless sensor networks
Publisher: Elsevier. Journal: Ad Hoc Networks. Article link: http://dx.doi.org/10.1016/j.adhoc.2016.08.005. © 2016 Elsevier B.V. All rights reserved.
A Framework of Fog Computing: Architecture, Challenges and Optimization
This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.

Fog Computing (FC) is an emerging distributed computing platform aimed at bringing computation close to its data sources, which can reduce the latency and cost of delivering data to a remote cloud. This feature and related advantages are desirable for many Internet-of-Things applications, especially latency-sensitive and mission-intensive services. With comparisons to other computing technologies, the definition and architecture of FC are presented in this article. The framework of resource allocation for latency reduction, combined with reliability, fault tolerance, privacy, and the underlying optimisation problems, is also discussed. We then investigate an application scenario and conduct resource optimisation by formulating the optimisation problem and solving it via a genetic algorithm. The resulting analysis generates some important insights into the scalability of FC systems.

This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/P020224/1] and the EU FP7 QUICK project under Grant Agreement No. PIRSES-GA-2013-612652. Yang Liu was supported by the Chinese Research Council.
Landscape Analysis Under Measurement Error
This is the author accepted manuscript. The final version is available from ACM via the DOI in this record.

There are situations where the need for optimisation with a global precision tolerance arises, for example due to measurement, numerical or evaluation errors in the objective function. In such situations, a global tolerance ε > 0 can be predefined such that two objective values are declared equal if the absolute difference between them is less than or equal to ε. This paper presents an overview of fitness landscape analysis under such conditions. We describe the formulation of common landscape categories in the presence of a global precision tolerance. We then proceed by discussing issues that can emerge as a result of using a tolerance, such as the increase in the neutrality of the fitness landscape. To this end, we propose two methods to exhaustively explore plateaus in such application domains, one of which is point-based and the other set-based.

Engineering and Physical Sciences Research Council (EPSRC)
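The ε-tolerance comparison, and the point-based style of plateau exploration it induces, can be sketched in a few lines. This is an illustrative sketch only, with invented helper names and a generic neighbourhood function, not the paper's algorithms:

```python
def eps_equal(a, b, eps):
    """Two objective values are declared equal under tolerance eps."""
    return abs(a - b) <= eps

def eps_better(a, b, eps):
    """a is strictly better (smaller) than b only if it beats b by
    more than eps; otherwise the pair is treated as neutral."""
    return b - a > eps

def plateau_members(start, fitness, neighbours, eps):
    """Point-based exhaustive exploration of the plateau containing
    `start`: a flood fill over neighbours whose fitness is eps-equal
    to the start's fitness. Larger eps merges more points into one
    plateau -- the growth in neutrality discussed above."""
    f0 = fitness(start)
    plateau = {start}
    frontier = [start]
    while frontier:
        x = frontier.pop()
        for n in neighbours(x):
            if n not in plateau and eps_equal(fitness(n), f0, eps):
                plateau.add(n)
                frontier.append(n)
    return plateau
```

Note that under `eps_equal` "equality" is not transitive (a may be ε-equal to b, and b to c, without a being ε-equal to c), which is one reason standard landscape categories need reformulating under a tolerance.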
On the Exploitation of Search History and Accumulative Sampling in Robust Optimisation
This is the author accepted manuscript. The final version is available from ACM via the DOI in this record.

Efficient robust optimisation methods exploit the search history when evaluating a new solution by using information from previously visited solutions that fall in the new solution's uncertainty neighbourhood. We propose a full exploitation of the search history by updating the robust fitness approximations across the entire search history rather than a fixed population. Our proposed method shows promising results on a range of test problems compared with other approaches from the literature.

This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/N017846/1].
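The core idea of history-based robust fitness approximation can be sketched as below. This is a one-dimensional toy with an interval neighbourhood and a simple mean; the function name, neighbourhood shape and aggregation are illustrative assumptions, not the paper's formulation:

```python
def robust_fitness(x, history, radius):
    """Approximate the robust (effective) fitness of solution `x` as
    the mean raw fitness of all previously evaluated points in the
    search history that fall within x's uncertainty neighbourhood
    (here a 1-D interval of half-width `radius`).

    `history` is a list of (point, fitness) pairs accumulated over
    the whole run; as it grows, approximations for *all* solutions,
    not just the current population, can be refreshed against it."""
    inside = [f for p, f in history if abs(p - x) <= radius]
    if not inside:
        return None  # no information yet in this neighbourhood
    return sum(inside) / len(inside)
```

The accumulative aspect is that each new evaluation is appended to `history` and every archived solution's robust estimate is then recomputed over the enlarged history, rather than over a fixed population.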
Robust Multi-Modal Optimisation
Robust and multi-modal optimisation are two important topics that
have received significant attention from the evolutionary computation
community over the past few years. However, the two topics
have usually been investigated independently, and there is a lack of
work exploring the important intersection between them. This gap
matters because there are real-world problems where both formulations
are appropriate in combination. For instance, multiple 'good' solutions
may be sought which are distinct in design space for an engineering
problem – where error between the computational model queried
during optimisation and the real engineering environment is believed
to exist (a common justification for multi-modal optimisation)
– but also engineering tolerances may mean a realised design does
not exactly match its specification (a robust optimisation
problem). This paper conducts a preliminary examination of such
intersections and identifies issues that need to be addressed for
further advancement in this new area. The paper presents initial
benchmark problems and examines the performance of combined
state-of-the-art methods from both fields on these problems.

This work was supported by the Engineering and Physical Sciences
Research Council [grant number EP/N017846/1]
Robust Optimisation using Voronoi-Based Archive Sampling
Engineering and Physical Sciences Research Council (EPSRC)
Optimisation and Landscape Analysis of Computational Biology Models: A Case Study
This is the author accepted manuscript. The final version is available from ACM via the DOI in this record.

The parameter explosion problem is a crucial bottleneck in modelling gene regulatory networks (GRNs), limiting the size of models that can be optimised to experimental data. By discretising state, but not time, Boolean delay equations (BDEs) provide a significant reduction in parameter numbers, whilst still providing dynamical complexity comparable to more biochemically detailed models, such as those based on differential equations. Here, we explore several approaches to optimising BDEs to time-series data, using a simple circadian clock model as a case study. We compare the effectiveness of two optimisers on our problem: a genetic algorithm (GA) and an elite accumulative sampling (EAS) algorithm that provides robustness to data discretisation. Our results show that both methods are able to distinguish effectively between alternative architectures, yielding excellent fits to data. We also perform a landscape analysis, providing insights into the properties that determine optimiser performance (e.g. number of local optima and basin sizes). Our results provide a promising platform for the analysis of more complex GRNs, and suggest the possibility of leveraging cost landscapes to devise more efficient optimisation schemes.

This work was financially supported by the Engineering and Physical Sciences Research Council [grant numbers EP/N017846/1, EP/N014391/1], and made use of the Zeus and Isca supercomputing facilities provided by the University of Exeter HPC Strategy.
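To make the BDE idea concrete, a toy two-variable delayed negative feedback loop can be simulated in a few lines. This is not the paper's circadian clock model; the two-variable structure, the delays, and the discrete time grid (BDEs are properly continuous-time) are all simplifications invented for illustration:

```python
def simulate_bde(steps, tau1=6, tau2=3):
    """Crude discrete-time simulation of a two-variable Boolean delay
    equation loop:
        x1(t) = NOT x2(t - tau2)
        x2(t) = x1(t - tau1)
    State is Boolean (discretised), but updates depend on values at
    earlier times via the delays tau1 and tau2 (in grid steps), which
    is what gives BDEs oscillatory dynamics with very few parameters.
    Histories before the first update are initialised to False."""
    x1 = [False] * steps
    x2 = [False] * steps
    for t in range(max(tau1, tau2), steps):
        x1[t] = not x2[t - tau2]
        x2[t] = x1[t - tau1]
    return x1, x2
```

The only free parameters here are the two delays, illustrating the abstract's point: compared with a differential-equation model of the same loop (rates, Hill coefficients, thresholds), the parameter count collapses, which is what makes BDE models tractable targets for the optimisers discussed above.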